6.2.1 Boomi Cloud API Management – Local Edition February 2026 Release Notes
Upcoming Product Retirement Notice:
Boomi Cloud API Management - Local Edition 5.6.2 will be officially retired and no longer supported after March 31, 2026. Refer to the Migration and Upgrade Guide to upgrade to the latest 6.x version.
For any questions about the retirement process, refer to this community article.
Deploying this release on-prem requires the following supported component versions:
- Kubernetes: 1.33
- Red Hat OpenShift: 4.19
- Helm: 3.15.0
- Docker: 28.3.0
- Podman: 4.9.4
- Added health-monitoring liveness probes for cache pods. These probes help the system automatically detect and restart unhealthy cache pods, improving its auto-recovery capabilities. (EIN-22317)
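  A minimal sketch of what such a probe can look like on a cache container, assuming a hypothetical TCP check; the probe type, port, and timings shown here are assumptions, not the values shipped in the product charts:

  ```yaml
  # Illustrative Kubernetes liveness probe for a cache container.
  # Probe type, port, and timings are assumed for the sketch.
  containers:
    - name: cache
      livenessProbe:
        tcpSocket:
          port: 11211            # assumed cache port
        initialDelaySeconds: 30
        periodSeconds: 10
        failureThreshold: 3      # restart the pod after 3 consecutive failures
  ```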
- Previously, the AWS RDS proxy connection with AWS Aurora DB failed because the Config UI component could not connect due to an SSL certificate error. To resolve this issue, we upgraded the MySQL driver in Cloud API Management - Local Edition to support AWS RDS proxy connections with AWS Aurora DB. (WA-16704)
- Added two fields, `requestId` and `threadId`, to Traffic Manager (TM) proxy logs. With these fields:
  - Access logs now include `threadId` under the key `thread`.
  - `requestId` lets you easily trace requests.
  - `threadId` helps diagnose thread-specific issues (deadlocks, resource contention) in TM proxy logs. (EIN-23203)
- Added Helm chart tolerations to `loader-cronjob.yaml` and `preinstall-job.yaml` for Kubernetes jobs. With this, all Kubernetes jobs and cronjobs can leverage tolerations, `topologySpreadConstraints`, affinity, and node selection to ensure proper pod scheduling on tainted nodes. (EIN-22819)
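  As an illustration, a job or cronjob template that honors chart-level scheduling settings might look like the fragment below; the values paths (`global.tolerations`, `global.nodeSelector`) are assumed names, not necessarily the keys used by the shipped charts:

  ```yaml
  # Illustrative Helm template fragment for a Kubernetes Job/CronJob pod spec.
  # The .Values paths are assumptions for the sketch.
  spec:
    template:
      spec:
        {{- with .Values.global.tolerations }}
        tolerations:
          {{- toYaml . | nindent 10 }}
        {{- end }}
        {{- with .Values.global.nodeSelector }}
        nodeSelector:
          {{- toYaml . | nindent 10 }}
        {{- end }}
  ```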
- Cache pod logs showed frequent SQL errors that prevented the pods from connecting to the database, caused by a connection pool management issue: when the connection pool reached its maximum capacity, `addConnectionRequest` returned null because all existing connections were still marked active in the pool (not returned to it) and no idle connections were available for reuse. This issue is now resolved. (EIN-23062)
- The filenames of the 6.2.0 migration build package were incorrectly labeled as 6.1.0. They have been updated with the 6.2.0 label in this release (`migrate_4x_to_6_APIM620_9.tar.gz` and `migrate_5x_to_6_APIM620_65.tar.gz`). (EIN-22744)
- The `request_id` field was missing from the Traffic Manager logs, affecting traceability and slowing down debugging. This issue is now resolved. (EIN-22753)
- The 6.1 Local Edition backend server response included a duplicate `Server` header key. Only the headers that are part of the backend response should be present in the response; this has been fixed by removing the duplicate headers. (EIN-22971)
- `aws` as an email configuration is no longer supported. Refer to the `Values.global.email.mail.transport.protocol` section in the 6.2.1 Values.yaml. (EIN-23309)
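  For illustration, the corresponding Values.yaml entry might look like the fragment below; `smtp` is an assumed example of a still-supported protocol value:

  ```yaml
  # Illustrative Values.yaml fragment; "aws" is no longer accepted here.
  global:
    email:
      mail:
        transport:
          protocol: smtp   # assumed supported value; "aws" is retired
  ```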
- The `apim.standard_labels` parameter was missing from `loader-cronjob.yaml` in `loader-job-full`, `loader-job-delta`, `loader-job-onprem`, and `logsync-job`. This issue has been resolved. (EIN-23199)
- Helm upgrade failed to restart pods after `ConfigMap` values were updated because duplicate checksum annotation keys were present in the PlatformAPI, Loader, and TrafficManager deployment templates. This issue has been resolved. (EIN-22993)
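  For context, the common Helm pattern behind this restart-on-change behavior is one uniquely keyed checksum annotation per ConfigMap in each deployment's pod template, for example (the template path here is illustrative):

  ```yaml
  # Illustrative deployment template fragment: a unique checksum annotation
  # forces a pod restart whenever the referenced ConfigMap content changes.
  spec:
    template:
      metadata:
        annotations:
          checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
  ```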
- The Traffic Manager pods were failing because cache items used in liveness probes were expiring. To resolve this, the TTL for the loader timestamp object was set to 25 (`loaderCompletionTTL = 25`) in the loader `configmap`. (EIN-23192)
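  Assuming the value lives in the loader ConfigMap data, the change might be expressed as below; everything except `loaderCompletionTTL: "25"` is assumed structure:

  ```yaml
  # Illustrative loader ConfigMap fragment; only the loaderCompletionTTL value
  # comes from the release note, the surrounding keys are assumptions.
  apiVersion: v1
  kind: ConfigMap
  metadata:
    name: loader               # assumed name
  data:
    loaderCompletionTTL: "25"  # TTL for the loader timestamp cache item
  ```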
- The following issues were fixed on the Configuration Manager > Verbose Logging page:
  - Previously, the Service, Endpoint, and Package Keys fields had display limits that hid the complete list of services, endpoints, and package keys. To address this, the display limits for these fields were increased, ensuring the entire list is now shown. (WA-17222) (WA-17223)
  - The Verbose Logging page used to take longer to load when more APIs needed to be fetched. This issue has now been resolved. (WA-17220)
- Intermittent 504 Gateway Timeout errors occurred because timeout threads were not canceled after request completion. This caused thread-interruption leakage, affecting unrelated requests at various stages of processing. This issue has now been fixed. (EIN-23163)
- The hourly quota did not reset as expected, causing subsequent requests to fail with 403 errors. This issue has now been fixed. (EIN-23371)
- Earlier, in the access logs for 6.1.0 and later builds, the `cluster_name` field was empty. This issue has been fixed, and you can now see `cluster_name` in the access logs. (EIN-22682)